Learn to rotate: Part orientation for reducing support volume via generalizable reinforcement learning

Authors

Abstract

In design for additive manufacturing, an essential task is to determine the optimal build orientation of a part according to one or multiple factors. The heuristic search used by most existing methods selects from a large solution space; such search algorithms occasionally converge to a local optimum and waste considerable time on trial and error. These issues could be addressed if there were an intelligent agent that knew the search/rotation path for a given 3D model. A straightforward way to construct such an agent is reinforcement learning (RL). By adopting this idea, the time-consuming online search of existing methods is moved to an offline training stage, potentially improving performance. This is a challenging research problem because the goal is an agent capable of rotating arbitrary models, whereas RL agents frequently struggle to generalize to new scenarios. Therefore, this paper proposes a generalizable reinforcement learning (GRL) framework to train the agent, along with a GPU-accelerated GRL benchmark to support the training, testing, and comparison of approaches. Experimental results demonstrate that the proposed approach on average outperforms others in terms of effectiveness and efficiency. It is proved to have the potential to solve the local-minima problems raised by search-based approaches, to swiftly discover a global (sub-)optimal orientation (i.e., 2.62x to 229.00x faster than the random algorithm), and to generalize beyond the environment in which it was trained.
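The abstract's core idea — posing build-orientation search as an RL problem, with rotations as actions and support-volume reduction as reward — can be sketched as a toy environment. Everything below is an illustrative assumption rather than the paper's actual formulation: the `OrientationEnv` class, the per-triangle prism estimate of support volume, and the 45° overhang threshold are all hypothetical simplifications.

```python
import numpy as np

def rotation_matrix(angles):
    """Compose rotations about the x and y axes (build direction is +z)."""
    ax, ay = angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    return Ry @ Rx

def support_volume(verts, faces, overhang_deg=45.0):
    """Rough support estimate: each steeply downward-facing triangle drags a
    prism of support material down to the build plate (z = min z)."""
    z_min = verts[:, 2].min()
    cos_thresh = -np.cos(np.radians(overhang_deg))
    total = 0.0
    for f in faces:
        a, b, c = verts[f]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate triangle
        if n[2] / norm < cos_thresh:  # face normal points steeply downward
            u, v = (b - a)[:2], (c - a)[:2]
            proj_area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
            height = (a[2] + b[2] + c[2]) / 3.0 - z_min
            total += proj_area * height
    return total

class OrientationEnv:
    """Toy episodic environment: actions apply small rotations to the part,
    reward is the resulting decrease in estimated support volume."""
    def __init__(self, verts, faces, step_deg=15.0, horizon=20):
        self.base_verts = np.asarray(verts, dtype=float)
        self.faces = faces
        self.step_rad = np.radians(step_deg)
        self.horizon = horizon

    def reset(self):
        self.angles = np.zeros(2)  # rotation about x, then y
        self.t = 0
        self.volume = self._current_volume()
        return self.angles.copy()

    def _current_volume(self):
        R = rotation_matrix(self.angles)
        return support_volume(self.base_verts @ R.T, self.faces)

    def step(self, action):
        # Actions 0..3: rotate +x, -x, +y, -y by one step.
        axis, sign = divmod(action, 2)
        self.angles[axis] += self.step_rad * (1 if sign == 0 else -1)
        new_volume = self._current_volume()
        reward = self.volume - new_volume  # support-volume reduction
        self.volume = new_volume
        self.t += 1
        return self.angles.copy(), reward, self.t >= self.horizon

# Toy mesh: a unit tetrahedron resting on the build plate.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = [[0, 2, 1], [0, 1, 3], [1, 2, 3], [0, 3, 2]]

env = OrientationEnv(verts, faces)
env.reset()
obs, reward, done = env.step(0)
```

A trained policy would replace the hand-picked action above; the point of the paper's offline-training idea is that, once trained, the agent emits a short rotation path directly instead of searching this space online.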


Related articles

Willingness to communicate in the Iranian context: language learning orientation and social support

Why some learners are willing to communicate in English while others are not has been under intensive investigation in L2 education. Willingness to communicate (WTC), proposed as the initiation of communication when given a choice, has recently played a crucial role in L2 learning. It was hypothesized that WTC would be associated with language learning orientations (LLOs) as well as social suppo...

Learning to reinforcement learn

In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this ...


Learning how to Active Learn: A Deep Reinforcement Learning Approach

Active learning aims to select a small subset of data for annotation such that a classifier learned on the data is highly accurate. This is usually done using heuristic selection methods, however the effectiveness of such methods is limited and moreover, the performance of heuristics varies between datasets. To address these shortcomings, we introduce a novel formulation by reframing the active...


Learning to learn by gradient descent by reinforcement learning

Learning rate is a free parameter in many optimization algorithms including Stochastic Gradient Descent (SGD). Choosing a good value of learning rate is non-trivial for important non-convex problems such as training of Deep Neural Networks. In this work, we formulate the optimization process as a Partially Observable Markov Decision Process and pose the choice of learning rate per time step...


Learning to trade via direct reinforcement

We present methods for optimizing portfolios, asset allocations, and trading systems based on direct reinforcement (DR). In this approach, investment decision-making is viewed as a stochastic control problem, and strategies are discovered directly. We present an adaptive algorithm called recurrent reinforcement learning (RRL) for discovering investment policies. The need to build forecasting mo...



Journal

Journal title: IEEE Transactions on Industrial Informatics

Year: 2023

ISSN: 1551-3203, 1941-0050

DOI: https://doi.org/10.1109/tii.2023.3249751